Integrated Information Theory


The Impact of Artificial Intelligence on Human Thought

Gesnot, Rénald

arXiv.org Artificial Intelligence

This research paper examines, from a multidimensional perspective (cognitive, social, ethical, and philosophical), how AI is transforming human thought. It highlights a cognitive offloading effect: the externalization of mental functions to AI can reduce intellectual engagement and weaken critical thinking. On the social level, algorithmic personalization creates filter bubbles that limit the diversity of opinions and can lead to the homogenization of thought and polarization. This research also describes the mechanisms of algorithmic manipulation (exploitation of cognitive biases, automated disinformation, etc.) that amplify AI's power of influence. Finally, the question of potential artificial consciousness is discussed, along with its ethical implications. The report as a whole underscores the risks that AI poses to human intellectual autonomy and creativity, while proposing avenues (education, transparency, governance) to align AI development with the interests of humanity.


The Reflexive Integrated Information Unit: A Differentiable Primitive for Artificial Consciousness

N'guessan, Gnankan Landry Regis, Karambal, Issa

arXiv.org Artificial Intelligence

Research on artificial consciousness lacks the equivalent of the perceptron: a small, trainable module that can be copied, benchmarked, and iteratively improved. We introduce the Reflexive Integrated Information Unit (RIIU), a recurrent cell that augments its hidden state $h$ with two additional vectors: (i) a meta-state $\mu$ that records the cell's own causal footprint, and (ii) a broadcast buffer $B$ that exposes that footprint to the rest of the network. A sliding-window covariance and a differentiable Auto-$Φ$ surrogate let each RIIU maximize local information integration online. We prove that RIIUs (1) are end-to-end differentiable, (2) compose additively, and (3) perform $Φ$-monotone plasticity under gradient ascent. In an eight-way Grid-world, a four-layer RIIU agent restores $>90\%$ reward within 13 steps after actuator failure, twice as fast as a parameter-matched GRU, while maintaining a non-zero Auto-$Φ$ signal. By shrinking "consciousness-like" computation down to unit scale, RIIUs turn a philosophical debate into an empirical mathematical problem.
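The abstract names the ingredients of an RIIU (hidden state $h$, meta-state $\mu$, broadcast buffer $B$, a sliding-window covariance, and a differentiable Auto-$Φ$ surrogate) but not their equations. The sketch below is a hypothetical NumPy reconstruction under assumed update rules: the meta-state is taken to be an exponential trace of state changes, and Auto-$Φ$ is approximated by the total correlation of the window covariance. Every name and formula here is illustrative, not the paper's.

```python
import numpy as np

class RIIUCell:
    """Minimal, hypothetical sketch of an RIIU-style recurrent cell.

    The exact update rules and the Auto-Phi surrogate in the paper are
    not given in the abstract; everything below is an assumption.
    """

    def __init__(self, n_hidden, window=16, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 1.0 / np.sqrt(n_hidden), (n_hidden, n_hidden))
        self.h = np.zeros(n_hidden)    # hidden state h
        self.mu = np.zeros(n_hidden)   # meta-state: the cell's causal footprint
        self.B = np.zeros(n_hidden)    # broadcast buffer exposing the footprint
        self.window = window
        self.history = []              # sliding window of states for covariance

    def step(self, x):
        h_prev = self.h
        self.h = np.tanh(self.W @ h_prev + x + self.B)
        # Meta-state as an exponential trace of how much each unit changed
        # (a stand-in for recording the cell's own causal footprint).
        self.mu = 0.9 * self.mu + 0.1 * np.abs(self.h - h_prev)
        # Broadcast buffer exposes the footprint to the rest of the network.
        self.B = self.mu
        self.history.append(self.h.copy())
        if len(self.history) > self.window:
            self.history.pop(0)
        return self.h

    def auto_phi(self, eps=1e-3):
        """Integration surrogate: total correlation of the sliding-window
        covariance (sum of log variances minus log-determinant)."""
        if len(self.history) < 2:
            return 0.0
        H = np.stack(self.history)
        C = np.cov(H, rowvar=False) + eps * np.eye(H.shape[1])
        _, logdet = np.linalg.slogdet(C)
        return float(np.sum(np.log(np.diag(C))) - logdet)
```

Total correlation is non-negative (Hadamard's inequality) and zero only for uncorrelated units, so gradient ascent on it rewards integrated dynamics; a trainable version would backpropagate through `auto_phi`, which is differentiable in the windowed states.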


Neuromorphic Correlates of Artificial Consciousness

Ulhaq, Anwaar

arXiv.org Artificial Intelligence

The concept of neural correlates of consciousness (NCC), which suggests that specific neural activities are linked to conscious experiences, has gained widespread acceptance. This acceptance is based on a wealth of evidence from experimental studies, brain imaging techniques such as fMRI and EEG, and theoretical frameworks like integrated information theory (IIT) within neuroscience and the philosophy of mind. This paper explores the potential for artificial consciousness by merging neuromorphic design and architecture with brain simulations. It proposes the Neuromorphic Correlates of Artificial Consciousness (NCAC) as a theoretical framework. While the debate on artificial consciousness remains contentious due to our incomplete grasp of consciousness, this work may raise eyebrows and invite criticism. Nevertheless, this optimistic and forward-thinking approach is fueled by insights from the Human Brain Project, advancements in brain imaging like EEG and fMRI, and recent strides in AI and computing, including quantum and neuromorphic designs. Additionally, this paper outlines how machine learning can play a role in crafting artificial consciousness, aiming to realise machine consciousness and awareness in the future.


Neuroscience Weighs in on Physics' Biggest Questions - Issue 107: The Edge

Nautilus

For an empirical science, physics can be remarkably dismissive of some of our most basic observations. We see objects existing in definite locations, but the wave nature of matter washes that away. We perceive time to flow, but how could it, really? We feel ourselves to be free agents, and that's just quaint. Physicists like nothing better than to expose our view of the universe as parochial. But when asked why our impressions are so off, they mumble some excuse and slip out the side door of the party. Physicists, in other words, face the same hard problem of consciousness as neuroscientists do: the problem of bridging objective description and subjective experience. To relate fundamental theory to what we actually observe in the world, they must explain what it means "to observe"--to become conscious of. And they tend to be slapdash about it. They divide the world into "system" and "observer," study the former intensely, and take the latter for granted--or, worse, for a fool.


What we are is more than what we do

Albantakis, Larissa, Tononi, Giulio

arXiv.org Artificial Intelligence

We are witnessing a surge in artificial systems, from autonomous robots to self-driving cars, all of which already display features of autonomy, agency, and goal-directed behavior. With the advent of Artificial General Intelligence (AGI) it is plausible that such artificial autonomous agents (AAA) will display behaviors similar to human autonomous agents consciously pursuing their own goals. The more those agents develop complex and human-like capacities, the more the impetus towards granting them consciousness and associated mental capacities (such as intrinsic motivations and intentions) analogous to humans will grow (Dehaene et al., 2017). In the pervasive functionalist Zeitgeist this is a foregone conclusion; it is only a matter of how rapidly AAA will develop and how sophisticated they will be. Because, once they show the same traits we do, what possibly could be missing?


Is AI already conscious?

#artificialintelligence

The ultimate goal of most high-level AI research is the development of a general artificial intelligence (GAI). In essence, what we want is a synthetic mind that could function the same as a human were it placed into a physical vessel of similar capability. Most experts – not all – believe we're decades away from anything of the sort. Unlike other incredibly complex problems such as nuclear fusion or readjusting the Hubble Constant, nobody really understands yet what GAI actually looks like. Some researchers think Deep Learning is the path to machines that think like humans, others believe we'll need an entirely new calculus to create the necessary "master algorithm," and still others think GAI is probably impossible.


The Mathematical Structure of Integrated Information Theory

Kleiner, Johannes, Tull, Sean

arXiv.org Artificial Intelligence

Integrated Information Theory is one of the leading models of consciousness. It aims to describe both the quality and quantity of the conscious experience of a physical system, such as the brain, in a particular state. In this contribution, we propound the mathematical structure of the theory, separating the essentials from auxiliary formal tools. We provide a definition of a generalized IIT which has IIT 3.0 of Tononi et al., as well as the Quantum IIT introduced by Zanardi et al., as special cases. This provides an axiomatic definition of the theory which may serve as the starting point for future formal investigations and as an introduction suitable for researchers with a formal background.
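For orientation only (this schematic is not the paper's generalized formalism, and the symbols are generic placeholders): in IIT 3.0 the integrated information $\varphi$ of a mechanism in a state measures irreducibility as the distance from its cause-effect repertoire to the nearest partitioned repertoire, minimized over partitions $P$:

```latex
% Schematic only: rep(m) denotes the cause-effect repertoire of a
% mechanism in state m, rep_P(m) its version under partition P, and
% D a repertoire distance; the paper abstracts these into axioms.
\[
  \varphi(m) \;=\; \min_{P}\; D\big(\, \mathrm{rep}(m) \,\big\|\, \mathrm{rep}_{P}(m) \,\big)
\]
```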


How computers will think

#artificialintelligence

Google and Facebook have an almost perfect log of your comings and goings, and they can combine that information with artificial intelligence to predict things about you. The systems are big and sophisticated, but their capabilities are still a far cry from the friendly, charismatic Samantha in the movie Her or the devilish, horrifying HAL in 2001: A Space Odyssey. After all, no one wants a HAL taking us out. But algorithms are slowly getting smarter. Computer scientists are programming systems that can teach themselves to play Atari games and poker, all on their own.


Integrated Information Theory

#artificialintelligence

The Initiative for a Synthesis in Studies of Awareness will organize a two-week Summer School, with plenary lectures in the morning and parallel sessions in the afternoon, in which the lecturers will lead study groups that are aimed at producing original research of publishable quality. The lectures will cover topics in various aspects of neuroscience, experimental as well as computational; theoretical physics; logic and philosophy; and various other fields in cognitive science and the study of complex systems, including artificial intelligence, artificial life, and robotics. We invite graduate students and postdoctoral researchers to participate in the summer school. Organizers will provide lodging for all accepted students and travel support for selected students. Applications will be open until December 25, 2016.